Patent abstract:
The present invention relates to a method (100) for predicting the future realization of at least one state that an object can take, from a source database storing, for past occurrences of said at least one state, values of variables relating to said object, said method (100) comprising the following steps: generating (108) at least two classifiers according to two different data classification algorithms; for each of said classifiers, automatic learning (112); and selecting (116) the best classifier among said classifiers; said method (100) further comprising a so-called detection phase (120) comprising: - an update (122) over time of said source database, and - at least one step (124) of prediction by said best classifier from said updated source database.
Publication number: FR3036515A1
Application number: FR1554441
Filing date: 2015-05-19
Publication date: 2016-11-25
Inventors: Jean-Michel Cambot; Remi Coletta; Loic Linais; Emmanuel Castanier
Applicant: TELLMEPLUS
IPC main class:
Patent description:

[0001] FIELD OF THE INVENTION The present invention relates to a method for predicting the realization of a state of an object, before said state is realized. It also relates to a system implementing such a method. The field of the invention is the field of predicting the occurrence of a predetermined event concerning an object, and in particular of a failure of an apparatus, or of a member of an apparatus, before said failure takes place. State of the Art Whatever their level of sophistication, industrial machines are regularly subject to breakdowns. When the machines are deployed in their operating environment, their breakdowns have as a first consequence a decrease or an interruption of the functionality that they offer, whatever the field considered. There are currently methods and systems for detecting a failure of a machine, and more generally a state of an object, when said state occurs. These methods and systems are based on one or more sensors arranged on the target machine and intended to detect the failure of the machine after said failure has taken place. These methods have several disadvantages. On the one hand, they do not make it possible to avoid a decrease or an interruption of the functionality performed by the machine. On the other hand, since the detection of the failure occurs only after its realization, the resolution of the failure cannot be achieved quickly, which causes a decline or lack of functionality for a significant period. In an attempt to overcome these disadvantages, methods and systems for the prediction of failures have been developed. These methods implement an algorithm for predicting a failure of a target machine, taking into account various data relating to said target machine. However, these methods and systems also have drawbacks: they are developed specifically for one type of machine, are not very flexible, and provide inaccurate results.
An object of the present invention is to overcome the aforementioned drawbacks. Another object of the present invention is to provide a more flexible method and system for predicting a state of an object. It is also an object of the present invention to provide a method and system for predicting a state of an object that can be used for all types of objects with little modification. Finally, another object of the present invention is to provide a method and a system for predicting a state of an object that provide more accurate results. SUMMARY OF THE INVENTION At least one of these objects is achieved by a method of predicting the realization of at least one state that an object can take, before said state is realized, from a database, called source database, storing, for at least one, in particular several, past occurrence(s) of said at least one state, values of at least one, in particular of several, variable(s) relating to said object, determined before said, or each of said, occurrence(s) of said state, said method comprising the following steps: - generating at least two classifiers according to two different data classification algorithms, - for each of said classifiers, automatic learning on a first part of said source database, - selecting, from among said classifiers, the classifier, called best classifier, providing the best prediction performance on a second part of said source database, by comparison of the results provided by each classifier; said method further comprising a so-called detection phase, comprising: - updating said source database over time with at least one new value of said variable, - at least one step of predicting a state, by said best classifier, from said updated source database.
Thus, to detect the future realization of a state of an object, the prediction method according to the invention makes it possible to generate and test several prediction classifiers from the data relating to said object, and in particular on past occurrences of said state, and to choose the classifier providing the best prediction result.
[0002] Consequently, the method according to the invention is more flexible, because it makes it possible to adapt to any type of object, for the detection of any state whose past occurrences are known, since each classifier learns directly from the data relating to the object.
[0003] The method according to the invention can also be used for all types of objects, with few modifications, since it makes it possible to automatically select the most suitable classifier from among several classifiers using different algorithms. Finally, the method according to the invention makes it possible to make a more precise prediction of the realization of a state of an object, because the prediction is performed with the classifier which, among the several classifiers tested, provides the best prediction result. Of course, each of the first and second parts of the source database includes at least one past occurrence, in particular a plurality of past occurrences, of the at least one state of the object. By "classifier" is meant an algorithm, or a family of algorithms, for statistical classification. This concept is well known as such to those skilled in the art in the field of statistical classification. It is therefore not necessary to detail this notion further. By "learning" is meant the process of determining, in particular by iteration, the coefficients of a classifier based on known input data and known output data. This concept is also well known as such to those skilled in the art in the field of statistical classification. It is therefore unnecessary to detail this concept further. More details on learning can be found at the following address: http://en.wikipedia.org/wiki/Machine_learning. In the rest of the description, the object for which the prediction is carried out may be called the target object, to avoid editorial heaviness.
Advantageously, the method according to the invention may further comprise at least one iteration of a step, called verification step, to check over time that the best classifier remains the one which, among all the classifiers generated, provides the best prediction performance, said verification step comprising the learning and selection steps carried out on said database as updated at the time of said iteration of said verification step. This verification step is performed after one or more prediction steps. Thus, the method according to the invention makes it possible to monitor over time that the classifier selected at the beginning of the process remains the one that provides the best prediction result. This characteristic of the method according to the invention is particularly advantageous. Indeed, thanks to this feature, the prediction method according to the invention is not based on a classifier learned once and for all, but continues to learn over time. This functionality makes it possible to take into account the evolution in time of the target object, such as for example the aging of the target object, a modification of the use of the target object, etc. The verification step can be triggered by an operator and/or automatically at a predetermined frequency, for example depending on the number of iterations of the detection phase.
[0004] According to a nonlimiting embodiment, the step of selecting the best classifier can comprise a measurement, for each classifier, of: - a piece of data, referred to as precision, relating to an error rate when detecting past occurrences of the at least one state, and - a piece of data, referred to as recall, relating to the number of past occurrences of the at least one state detected by said classifier; the selection of the best classifier being carried out as a function of said precision data and/or of said recall data. Thus, the method according to the invention makes it possible to better take into account the results of each classifier in order to choose the classifier providing the best prediction result. Advantageously, the method according to the invention may further comprise, after the automatic learning step, a so-called cross-validation step, testing at least one, in particular each, classifier on a third part of said source database. Of course, this third part of the source database comprises at least one past occurrence, in particular a multitude of past occurrences, of the at least one state of the object.
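As a rough illustration, the precision and recall data described above, and a selection based on them, might be computed as follows. This is a minimal sketch; the function names and the F1-style combined score are illustrative assumptions, not part of the patent.

```python
def precision_recall(predictions, actual):
    """Compare predicted failure flags (1/0) against actual past occurrences."""
    tp = sum(1 for p, a in zip(predictions, actual) if p and a)
    fp = sum(1 for p, a in zip(predictions, actual) if p and not a)
    fn = sum(1 for p, a in zip(predictions, actual) if not p and a)
    precision = tp / (tp + fp) if tp + fp else 0.0  # error rate complement
    recall = tp / (tp + fn) if tp + fn else 0.0     # share of occurrences found
    return precision, recall

def select_best(classifier_outputs, actual):
    """Pick the classifier whose combination of the two data is best.

    classifier_outputs maps a classifier name to its predictions on the
    second (selection) part of the source database.
    """
    def score(preds):
        p, r = precision_recall(preds, actual)
        return 2 * p * r / (p + r) if p + r else 0.0
    return max(classifier_outputs, key=lambda name: score(classifier_outputs[name]))
```

A classifier that flags failures at random will score low on precision even if its recall is high, so combining the two data, as the patent suggests, guards against trivially alarmist classifiers.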
[0005] This cross-validation step validates, on a third part of the source database different from the first part, the learning of a classifier carried out on the first part. This cross-validation step makes it possible, more particularly, to test the stability of each classifier obtained following the learning step. Different cross-validation techniques can be used for a classifier, such as for example the technique known as "testing and validation", the technique known as "holdout method", the technique known as "k-fold cross-validation", or again the technique known as "leave-one-out cross-validation". The first part of the source database, used for learning, can be called the learning part. It can be 60% or more of the source database. The second part of the database, different from the first part, can be called the selection part. The second part of the database can correspond to 20% of the source database. The third part of the database, different from the first and second parts, can be called the test or cross-validation part. The third part of the database can correspond to 20% of the source database. The first part and the third part of the source database may be different for each classifier. On the other hand, the second part of the source database used during the selection step is identical for each classifier. The generation step may advantageously comprise, for at least one classifier, a step of adjusting/entering a parameter relating to the architecture of said classifier. Such a parameter can be or include a maximum/minimum number of nodes in the classifier, a maximum/minimum depth of said classifier, a number of trees in the classifier, etc.
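The 60% / 20% / 20% partition of the source database described above can be sketched in a few lines. The function name and the fixed proportions are illustrative assumptions for this sketch.

```python
import random

def split_source_database(rows, seed=0):
    """Shuffle, then split into a learning part (60%), a selection part (20%)
    and a cross-validation part (20%), in the proportions mentioned above."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # deterministic shuffle for the sketch
    n = len(rows)
    a, b = int(0.6 * n), int(0.8 * n)
    return rows[:a], rows[a:b], rows[b:]
```

In a real system the selection part would be the same for every classifier, as the patent requires, while the learning and test parts could be reshuffled per classifier.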
[0006] Such an adjustment step makes it possible to apply at least one constraint, identical or different, for at least one, in particular each, classifier and thus to control/adjust the computing resources necessary for carrying out the method according to the invention, for example in terms of memory and computing power, and/or the execution time of the method according to the invention. It is thus possible to adjust and further customize the method according to the invention to each object, and more generally to each use case.
[0007] Advantageously, the method according to the invention may comprise, before the learning step, a step of generating said source database by reconciliation of at least one database comprising values of at least one variable relating to said object with at least one other database comprising data relating to at least one past occurrence of the at least one state. Such a step is necessary when the data relating to the target object are stored in different databases. For example, in the case of elevator-type machines, it very often happens that the data measured by the sensors arranged on the elevator are stored in a first database and the data relating to the past failures of the elevator are stored in another database. In this case, it is necessary to construct a single database comprising both the data measured by the sensors and the past occurrences of a failure of the elevator.
[0008] According to a particularly preferred embodiment, for the target object, in particular for each target object, the data relating to said object are organized in the form of a temporal frieze, or timeline.
[0009] More particularly, the source database comprises, for the target object, in particular for each target object, a timeline on which are indicated chronologically: - the values of the measured variables, - the signaling of the occurrence of a state, in particular of each state, of the object, and - for each state, data relating to an intervention, such as a repair or a replacement of the object or of a member of the object. More generally, for each target object, the source database can advantageously store, for each measured value of a variable, at least one temporal datum relating to the moment of measurement of said value, and, for each past occurrence of at least one state, in particular of each state, a temporal datum relating to the moment of said occurrence. According to an advantageous embodiment, at least one, in particular each, of the steps, in particular the learning step, and/or the selection step, and/or the prediction step, may take into account data on a predetermined sliding time window preceding the current time. Thus, the method according to the invention makes it possible to make a prediction based, not on a snapshot of the values of the variables relating to the object, but on an evolution of the values of these variables. Such a prediction is more precise and finer. For example, a high instantaneous temperature value measured by a sensor of a machine is not necessarily a sign of a machine failure; one must take into account how the temperature has evolved. Indeed, while a regular rise in temperature may not be a sign of failure, a rapid temperature spike may be. The method according to the invention makes it possible to make a fine prediction capable of discriminating these cases. This makes it possible either to avoid false alarms or to avoid the non-detection of a future failure.
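The distinction drawn above between a regular rise and a rapid spike can be captured by simple features over the sliding window. A minimal sketch, with illustrative feature names:

```python
def window_features(values, window=5):
    """Summarize the last `window` measurements of one variable: mean,
    total drift over the window, and the sharpest single-step jump
    (a crude spike indicator)."""
    recent = list(values)[-window:]
    jumps = [b - a for a, b in zip(recent, recent[1:])]
    return {
        "mean": sum(recent) / len(recent),
        "drift": recent[-1] - recent[0],          # slow, regular rise
        "max_jump": max(jumps) if jumps else 0.0,  # sudden spike
    }
```

Two temperature histories ending at the same value then become distinguishable: a steady climb yields a large drift but a small max_jump, while a spike yields a large max_jump, which is what lets the classifier discriminate the two cases.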
[0010] For at least one target object, the source database may further comprise: - at least one calculated datum based on one or more measured data and a predetermined relation, such as for example a summation, a subtraction, an average, a variance, an integral or a derivative of one or more variables, for example over a predetermined time window, and - at least one so-called exogenous datum relating to an environment in which the target object is located, such as for example a temperature external to said object, a humidity external to said object, a failure of a member or apparatus with which said object is in relation or with which said object cooperates, etc. At least one classifier used in the present invention can implement: - a decision tree, - a support vector machine, - a clustering algorithm, that is to say a hierarchical grouping or partitioning algorithm, - a neural network, - a linear regression, - a set of decision trees, of "random forest" type for example, - etc. Each of these classifiers is known to those skilled in the art in the field of prediction. It is not necessary here to detail the architecture of each of these classifiers.
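The calculated data listed above (average, variance, derivative over a window) are cheap to derive from the measured values. A sketch, with an illustrative function name and a last-step difference standing in for the derivative:

```python
def calculated_data(window_values):
    """Derived variables of the kind listed above, computed from measured
    values over a predetermined window: average, variance and a discrete
    derivative (last-step difference)."""
    n = len(window_values)
    mean = sum(window_values) / n
    variance = sum((v - mean) ** 2 for v in window_values) / n
    derivative = window_values[-1] - window_values[-2] if n > 1 else 0.0
    return {"average": mean, "variance": variance, "derivative": derivative}
```

Each derived variable simply becomes one more input column of the source database, alongside the raw measurements and any exogenous data.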
[0011] For at least one classifier, the automatic learning step can implement learning that is: supervised, unsupervised, semi-supervised, partially supervised, by reinforcement, or by transfer. Each of these learning techniques is also known as such to those skilled in the art. For the sake of brevity, therefore, they will not be detailed in this application. The prediction step may comprise providing at least one piece of data relating to the result of the prediction, in particular whatever the result of the prediction, or only when the result of the prediction testifies to the future realization of a predetermined state. This step may further include displaying at least one data item when a future realization of a state is detected. Alternatively or additionally, this prediction step may comprise the display of a data item identifying the detected state, for example in the form of a message that is intelligible to humans. In addition, the prediction step may, additionally or alternatively, trigger an audible or visual warning when a predetermined state, for example a failure, is detected. The method according to the invention can be implemented for the prediction of one state among several predetermined states for an object. The method according to the invention can also be implemented for the prediction of a state for several objects, identical or different, arranged on the same site or on at least two sites distributed in space, that is to say distant from each other. In this case, the method can be performed for each object independently of the others.
[0012] Alternatively, or in addition, for at least one object, the method may take into account at least one piece of data relating to another object or an organ of another object on the same site. For example, when the method is used for the prediction of a failure for elevators, it can be applied independently for each elevator, particularly when they are all distant from each other. On the other hand, in the case where two lifts are on the same site, in particular in the same building, the method can take into account at least one item relating to one of the lifts for the prediction of a breakdown of the other elevator and vice versa.
[0013] Advantageously, the method according to the invention can be applied for the prediction of a state of failure of a machine or of a member of a machine. In this case, the measured variables relating to the machine may comprise at least one of the following variables: pressure, temperature, humidity, etc., in/around the machine, in/around a member of the machine, etc. More generally, the method according to the invention can be applied to any machine equipped with sensor(s) and able to report data relating to the machine, or to a member of the machine, on a regular basis (in particular, connected objects).
[0014] The invention also relates to a computer program product comprising instructions implementing all the steps of the method according to the invention, when it is implemented or loaded in a computer apparatus.
[0015] Such a computer program product may include computer instructions written in any type of computer language, such as C, C++, JAVA, etc.
[0016] The invention also relates to a system comprising means configured to implement all the steps of the method according to the invention. Such a system can be reduced to a computer, or more generally to an electronic/computing apparatus. DESCRIPTION OF THE FIGURES AND EMBODIMENTS Other advantages and features will appear on examining the detailed description of non-limiting examples, and the accompanying drawings, in which: - FIGURE 1 is a diagrammatic representation of a non-limiting exemplary embodiment of a prediction method according to the invention; - FIGURE 2 is a diagrammatic representation of a non-limiting example of a system according to the invention, in particular for implementing the method of FIGURE 1; and - FIGURES 3-4 show a schematic representation of a very simplified exemplary embodiment for predicting the operating state of four machines. It is understood that the embodiments which will be described in the following are in no way limiting. It will in particular be possible to imagine variants of the invention comprising only a selection of the characteristics described hereinafter, isolated from the other characteristics described, if this selection of characteristics is sufficient to confer a technical advantage or to differentiate the invention with respect to the state of the prior art. This selection comprises at least one, preferably functional, feature without structural details, or with only a portion of the structural details if this portion alone is sufficient to confer a technical advantage or to differentiate the invention from the state of the prior art. In particular, all the variants and all the embodiments described can be combined with each other if nothing stands in the way of this combination on a technical level. In the figures, elements common to several figures retain the same reference. FIGURE 1 is a schematic representation of a non-limiting exemplary embodiment of a prediction method according to the invention.
The method 100 described in FIGURE 1 will be described hereinafter in the context of an application example which is the detection of failures on elevators arranged on sites distributed in space. The method 100 shown in FIGURE 1 comprises a phase 102, referred to as the prior phase, carried out only at the beginning of the method 100. This prior phase 102 comprises an optional step 104 of generating a source database, in the form of a temporal frieze or timeline, for each elevator concerned by the prediction. The source database may be generated by measuring and detecting data, over a predetermined period, by sensors on each elevator. Alternatively, the source database can be generated by reconciliation of data previously stored in several databases, namely: - at least one database comprising the values of different variables measured over time for each elevator, as well as, for each measurement, time stamp data indicating the time of the measurement, and - at least one database listing, for each elevator, the past failures, as well as time stamp data indicating the time of each failure. The variables whose values are measured for each elevator may include the temperature, the pressure, the load carried by the elevator, the number of return trips made, the distance traveled, etc. Of course, if the source database already exists, step 104 is not performed. The method 100 further comprises an optional step 106 of enriching the source database with one or more variables obtained by processing the variables already existing in the database. For example, this step 106 can add to the database at least one variable obtained by applying a mathematical relation to at least one variable existing in the database, such as for example: a summation, a subtraction, a multiplication and/or a division of at least two variables or of at least two values of the same variable; a variance, a derivative or an integral of at least one variable over a predetermined, in particular sliding, time window; etc.
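The reconciliation into a per-elevator timeline described for step 104 might look like the following sketch. The record layout (time-stamped measurement tuples and failure time stamps) is an assumption made for illustration.

```python
def build_timeline(measurements, failures):
    """Merge time-stamped measurements and time-stamped failure records
    into one chronological timeline for a single elevator.

    measurements: iterable of (timestamp, {variable: value}) pairs
    failures:     iterable of failure timestamps
    """
    events = [(t, "measure", values) for (t, values) in measurements]
    events += [(t, "failure", None) for t in failures]
    return sorted(events, key=lambda e: e[0])  # chronological order
```

The resulting single timeline is what supervised learning needs: for each "failure" event, the "measure" events that precede it supply the input variables, and the failure itself supplies the output label.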
The enrichment step 106 can additionally or alternatively comprise an addition to the database of at least one value of an exogenous variable relating to the environment of the elevator, such as, for example, the temperature outside the elevator, the number of floors served by the elevator, etc. Of course, this enrichment step 106 is also optional.
[0017] In a step 108, the method realizes a generation of at least two classifiers employing different classification algorithms. In the present example, the method generates three classifiers, namely: - a first classifier performing a classification by a decision tree, - a second classifier performing a classification by a neural network, and - a third classifier performing a classification by partitioning ("data clustering" in English). In practice, this step 108 creates an instance of each of these classifiers according to the number of input data and the number of output states. In the present case, each classifier is instantiated to take 6 variables as input and make a prediction of a failure of each elevator, i.e., to perform a classification into a single class corresponding to a single state, namely "state = failure". During an optional step 110, it is possible to apply at least one parameter, called a constraint, concerning the architecture of a classifier.
[0018] In the present case, the step 110 fixes, for the first classifier, the value of a maximum-depth parameter and, for the second classifier, the value of a parameter for the number of nodes, these values being predetermined by the user or an operator.
[0019] In a step 112, each classifier generated in step 108 is then trained with 60% of the data of the source database, including, for each state, a multitude of past occurrences of a failure of each elevator. In the present example, the automatic learning performed is supervised learning, that is, each occurrence of a failure is indicated as output to each classifier and the values of the variables measured before this failure are entered as input data. An optional step 114 makes it possible to validate the automatic learning of each classifier by cross-validation, for example on 20% of the data of the database. Of course, these 20% are different from the 60% of data used in step 112. This is a simple test step, to check the stability of the classifier. If the learning is not effective, the classifier will not be stable and will not be chosen subsequently. The prior phase 102 then comprises a step 116 of selecting the classifier which provides the best prediction result. To do this, each of the three classifiers is tested on the same 20% of the data of the source database. For each of the three classifiers, the following are measured: - a piece of data, referred to as precision, relating to an error rate when detecting past occurrences of a failure state of each elevator: this precision data indicates errors during classification, such as, for example, not detecting a past failure or detecting a failure when it did not occur; and - a piece of data, called recall, relating to the number of past failures detected. Based on the value of the precision data and the value of the recall data for each classifier, the classifier providing the best detection performance is selected. In a step 118, the selected classifier, for example the first classifier, is stored as the best classifier. The other classifiers are also stored in this step 118.
Preferentially, the learning steps 112-116 are performed taking into account the values of the measured variables, where appropriate calculated, in a sliding time window of a predetermined value, such as a month or 15 days, going back into the past and the end of which corresponds to the current moment or to the time of the last measurement. The value of the time window may be predetermined or adjusted during a step performed, for example, at the same time as or before the step 104 of generating the source database.
[0020] The method 100 comprises, following the prior phase 102, at least one iteration of a phase 120, called the detection phase. Phase 120 includes a step 122 of updating the source database over time. This step 122 adds, to the timeline associated with each elevator, the latest values of the measured variables, where appropriate calculated, in association with time data indicating the time of measurement for each new value of each variable. Phase 120 also includes a prediction step 124 with the best classifier, based on data from the updated database. To do this, the last values added to the database, preferably together with the values stored in the database prior to the updating step and falling within the sliding time window, are given as input to the best classifier, which provides a prediction datum indicating a future occurrence, or not, of a state of failure of an elevator. The prediction step 124 may be performed after "n" updating steps, with n ≥ 1, or at another frequency, for example a temporal frequency such as every week, or at the request of an operator.
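The detection phase just described — update the database at each new measurement, predict every "n" updates — can be sketched as a loop. Names are illustrative; the predictor stands in for the best classifier selected in step 116.

```python
def detection_phase(stream, best_classifier_predict, n=3):
    """Sketch of steps 122/124: append each new measurement to the source
    database, and run the best classifier every n updates (n >= 1)."""
    database, predictions = [], []
    for i, value in enumerate(stream, start=1):
        database.append(value)                          # step 122: update
        if i % n == 0:                                  # step 124: predict
            predictions.append(best_classifier_predict(database))
    return database, predictions
```

In practice the predictor would only read the portion of the database inside the sliding time window, rather than the whole history as this toy loop passes it.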
[0021] When the prediction data predicts an occurrence of a failure, the method according to the invention may comprise one or more audible or visual alerting steps intended for a local or remote operator. The method 100 of FIGURE 1 further comprises at least one iteration of a step 126, called the verification step, to check over time that the best classifier remains the one which, among all the classifiers generated and stored in step 118, provides the best prediction performance. To do this, this step 126 includes an iteration of the steps 112-116 described above, with the database as updated at the time of performing the verification step. This verification step is performed after "n" iterations of the prediction step or of the detection phase, with n ≥ 1, or at another frequency, for example a temporal frequency such as every week, or at the request of an operator. If the best classifier is still the one currently used, then the method 100 resumes at step 122 with the current best classifier. Otherwise, the method resumes at step 122 with the new best classifier, which is stored in place of the former best classifier.
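Step 126 amounts to re-running the selection of steps 112-116 over the stored classifiers on the updated database. A one-line sketch, with an illustrative scoring callback:

```python
def verification_step(stored_classifiers, score_on_updated_db):
    """Sketch of step 126: re-select the best classifier on the database as
    updated at verification time; the winner may or may not be the one
    currently in use, and replaces it only if it differs."""
    return max(stored_classifiers, key=score_on_updated_db)
```

Because the score is recomputed on the updated database, a classifier that suited the machine when it was new can lose its rank as the machine ages, which is exactly the continued-learning behavior the patent claims.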
[0022] FIGURE 2 is a schematic representation of a non-limiting example of a system according to the invention, in particular configured for carrying out the method 100 of FIGURE 1. The system 200 of FIGURE 2 comprises a supervision module 202 for managing and coordinating the operation of the various modules of the system, namely: - an optional module 204, configured to generate a source database 206, by reconciliation of various existing databases and/or by data enrichment, in particular as described above with reference to steps 104 and 106; - a module 208 for instantiating several classifiers, configured to create an instance of several classifiers, and possibly to adjust at least one parameter relating to the architecture of at least one classifier, in particular as described above with reference to steps 108 and 110; - at least one training module 210, configured to perform the automatic learning of each classifier, in particular as described above with reference to step 112; - at least one optional cross-validation module 212, configured to perform the cross-validation of each classifier, in particular as described above with reference to step 114; - at least one selection module 214, configured to select the best classifier, in particular as described above with reference to step 116; - at least one updating module 216, configured to update the source database over time, in particular as described above with reference to step 122; - at least one prediction module 218, configured to provide prediction data concerning the future occurrence of a state, for example of a failure, in particular as described above with reference to step 124; and - at least one verification module 220, configured to verify that the best classifier is still the one used for the prediction, in particular as described above with reference to step 126. Although shown as separate in FIGURE 2, several modules, and in particular all the modules, can be integrated in a single module.
The system 200 may be a computer, a processor, an electronic chip, or any other means, configurable in hardware or software, for performing the steps of the method of the invention. FIGURES 3-4 give a schematic representation of an example of a very simplified application of the method according to the invention to machines. The example shown in FIGURES 3-4 relates to four machines for which two variables are measured, one corresponding to the temperature T° in the machine and the other to the pressure P in the machine.
[0023] The values of the variables are measured and reported to a server remote from the machines, at least once a day, through an Internet-type communication network. At each upload, the measured values of the variables are stored in a table, such as the table 300 shown in FIGURE 3.
[0024] In Table 300, the measured values of the variables T° and P at a given moment show that the four machines exhibit different behaviors. Machines 1, 2 and 3 have normal operation and machine 4 has an abnormal operation signifying a failure.
[0025] In the present example, to predict the future behavior of each machine, an instance of two different classifiers is created, namely an instance of a decision-tree-type classifier and an instance of a kMeans classifier. From numerous values of the variables T° and P reported in the past for each machine, and from the past operating state, normal or abnormal, of each machine, each classifier undergoes:
- a training on a first part, for example 60%, of the reported values,
- then a cross-validation on a second part, for example 20%, of the reported values.
Finally, the two classifiers are tested on a third part, the remaining 20%, of the reported values to determine the best classifier for the prediction of the behavior of each of the four machines. For the sake of simplicity of description, in the present example, each of the two classifiers created is tested on the values indicated in Table 300. The result obtained is shown in FIG. 4 for each classifier. Thus, the decision-tree classifier 402 detects the failure of machine 4 and the normal operation of the other three machines, while the kMeans classifier 404 detects a normal operation for two of the machines and a failure for the two others. Therefore, the best classifier among the two classifiers tested is the decision-tree classifier, which is selected and used for future predictions of the operating state of these four machines. The example shown in FIGURES 3-4 is a very simplified example, given by way of illustration only. In a real case, the number of variables is much greater, of the order of a thousand variables, and the number of machines is also greater. Consequently, the size of the classifiers is also larger than that of the classifiers shown in FIGURE 4.
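The train/validate/test selection just described can be sketched in a few lines of Python. This is an illustrative sketch only, not the patent's implementation: the synthetic temperature/pressure data, the failure rule, the 60/20/20 split and the use of scikit-learn's `DecisionTreeClassifier` and `KMeans` are all assumptions made for the example.

```python
# Illustrative sketch of the classifier generation / learning / selection steps
# (steps 108, 112, 116). Data, split and models are assumptions, not the patent's.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic history of (temperature T°, pressure P) readings per upload.
X = rng.normal(loc=[60.0, 2.0], scale=[5.0, 0.3], size=(500, 2))
# Hypothetical failure rule: abnormal when both variables are high.
y = ((X[:, 0] > 68) & (X[:, 1] > 2.3)).astype(int)

# 60% training / 20% cross-validation / 20% final test, as in the example.
i1, i2 = int(0.6 * len(X)), int(0.8 * len(X))
X_tr, y_tr = X[:i1], y[:i1]
X_te, y_te = X[i2:], y[i2:]

tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_tr)
# kMeans is unsupervised: map each cluster to the majority label seen in training.
cluster_to_label = {c: int(round(y_tr[km.labels_ == c].mean())) for c in range(2)}

def km_predict(samples):
    return np.array([cluster_to_label[c] for c in km.predict(samples)])

# Compare the two classifiers on the held-out part and keep the best one.
scores = {
    "decision_tree": (tree.predict(X_te) == y_te).mean(),
    "kmeans": (km_predict(X_te) == y_te).mean(),
}
best = max(scores, key=scores.get)  # the "best classifier" kept for prediction
print(best, scores)
```

In the detection phase, `best` would then be re-applied to each new upload of the source database, and the selection above re-run periodically, as in the verification step.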
[0026] Of course, the invention is not limited to the examples which have just been described and many adjustments can be made to these examples without departing from the scope of the invention.
Claims (15)
[0001]
1. Method (100) for predicting the realization of at least one state that an object can take, before said state is realized, from a database (206), called source database, storing, for at least one past occurrence of said at least one state, values of at least one variable relating to said object, determined before said occurrence of said state, said method (100) comprising the following steps: - generation (108) of at least two classifiers according to two different data classification algorithms, - for each of said classifiers, automatic learning (112) on a first part of said source database (206), and - selection (116), among said classifiers, of a classifier, called best classifier, providing the best prediction performance on a second part of said source database (206), by comparing the results provided by each classifier; said method (100) further comprising a so-called detection phase (120) comprising: - updating (122), over time, of said source database with at least one new value of said variable, and - at least one step (124) of predicting a state of said object, by said best classifier, from said updated source database (206).
[0002]
2. Method (100) according to claim 1, characterized in that it further comprises at least one iteration of a step (126), called verification, to check over time that the best classifier remains the one which, among all the generated classifiers, provides the best prediction performance, said verification step (126) comprising the learning (112) and selection (116) steps performed on said database (206) as updated at the time of said iteration of said verification step (126).
[0003]
3. Method (100) according to any one of the preceding claims, characterized in that the step of selection (116) of the best classifier comprises a measurement, for each classifier, of: - a datum, called precision, relating to an error rate when detecting past occurrences of said at least one state, and - a datum, called recall, relating to the number of past occurrences of said at least one state detected by said classifier; the selection of the best classifier being performed according to said precision datum and/or said recall datum.
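The precision and recall data of this claim can be sketched, for illustration only and outside the claim language, as follows; the function and variable names are hypothetical.

```python
# Illustrative sketch of the precision / recall measurement of claim 3.
def precision_recall(predicted, actual):
    """predicted/actual: sequences of 0/1 flags, 1 = occurrence of the state."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0  # 1 - error rate on detections
    recall = tp / (tp + fn) if tp + fn else 0.0     # share of occurrences detected
    return precision, recall

# Example: 3 detections, of which 2 are true occurrences; 3 actual occurrences.
p, r = precision_recall([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

A classifier with high precision rarely raises false alarms; one with high recall misses few real occurrences. The selection step can weigh either or both, as the claim states.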
[0004]
4. Method (100) according to any one of the preceding claims, characterized in that it further comprises, after the automatic learning step (112), a step (114), called cross-validation, testing at least one, in particular each, classifier on a third portion of said source database (206).
[0005]
5. Method (100) according to any one of the preceding claims, characterized in that, for at least one classifier, the generation step (108) comprises a step (110) of adjusting/entering a parameter relating to the architecture of said classifier, such as a maximum/minimum number of nodes and/or a maximum/minimum depth of said classifier.
[0006]
6. Method (100) according to any one of the preceding claims, characterized in that it comprises, before the automatic learning step (112), a step (104) of generating said source database (206) by reconciling at least one database comprising values of at least one variable relating to said object with at least one other database comprising data relating to at least one occurrence of said at least one state.
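The reconciliation of this claim amounts to joining a table of measured variables with a table of past state occurrences. A minimal sketch, using pandas and column names that are purely illustrative assumptions:

```python
# Illustrative sketch of the source-database generation of claim 6:
# joining variable measurements with recorded state occurrences per machine.
import pandas as pd

measures = pd.DataFrame({
    "machine": [1, 2, 4],
    "T": [60.1, 62.3, 71.8],   # temperature readings
    "P": [2.0, 2.1, 2.6],      # pressure readings
})
occurrences = pd.DataFrame({
    "machine": [4],
    "state": ["failure"],      # past occurrence of the predicted state
})

# Left join keeps every measurement; machines without a recorded
# occurrence are labelled "normal".
source_db = measures.merge(occurrences, on="machine", how="left")
source_db["state"] = source_db["state"].fillna("normal")
```

The resulting `source_db` plays the role of the source database (206) on which the classifiers are then trained.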
[0007]
7. Method (100) according to any one of the preceding claims, characterized in that the source database (206) stores: - for each measured value of a variable, at least one temporal datum relating to the moment of measurement of said value, and - for each past occurrence of at least one, in particular of each, state, a temporal datum relating to the moment of said occurrence.
[0008]
8. Method (100) according to any one of the preceding claims, characterized in that at least one, in particular each, of the steps, in particular the learning step (112), and/or the selection step (116), and/or the prediction step (124), takes into account data on a predetermined sliding time window preceding the present moment.
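Restricting a step to a sliding time window can be sketched, for illustration only, as a filter on timestamped records; the 30-day window length and record format are assumptions.

```python
# Illustrative sketch of the sliding time window of claim 8.
from datetime import datetime, timedelta

def in_window(records, now, window=timedelta(days=30)):
    """Keep only the records whose timestamp falls in [now - window, now]."""
    return [r for r in records if now - window <= r["t"] <= now]

now = datetime(2016, 5, 18)
records = [
    {"t": datetime(2016, 5, 10), "T": 65.2, "P": 2.1},  # inside the window
    {"t": datetime(2016, 3, 1), "T": 61.0, "P": 1.9},   # outside the window
]
recent = in_window(records, now)
```

Only `recent` would then be fed to the learning, selection or prediction step, so that old measurements stop influencing the model as the window slides forward.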
[0009]
9. Method (100) according to any one of the preceding claims, characterized in that the source database (206) comprises: - at least one datum calculated from one or more measured data according to a predetermined relation, and - at least one so-called exogenous datum relating to an environment in which said target object is located.
[0010]
10. Method (100) according to any one of the preceding claims, characterized in that at least one classifier is: - a decision tree, - a support vector machine, or - a clustering algorithm, that is to say a hierarchical grouping or partitioning algorithm.
[0011]
11. Method (100) according to any one of the preceding claims, characterized in that, for at least one classifier, the automatic learning step (112) can carry out a learning that is: - supervised, - unsupervised, - semi-supervised, - partially supervised, - by reinforcement, or - by transfer.
[0012]
12. Method (100) according to any one of the preceding claims, characterized in that it is implemented for the prediction of the realization of at least one state for several objects arranged on the same site or on at least two sites distributed in space.
[0013]
13. Method (100) according to any one of the preceding claims, characterized in that it is implemented for the prediction of a failure state of a machine or of a machine component.
[0014]
14. A computer program product comprising instructions implementing all the steps of the method according to any one of the preceding claims, when they are executed by or loaded into a computer apparatus.
[0015]
15. System (200) comprising means configured to carry out all the steps of the method according to any one of claims 1 to 13.
Similar technologies:
Publication number | Publication date | Patent title
EP2368161B1|2016-11-09|Detection of anomalies in an aircraft engine
CA2746543C|2018-01-02|Identification of defects in an aircraft engine
EP3123139B1|2018-05-23|Method for estimating the normal or abnormal character of a measured value of a physical parameter of an aircraft motor
EP0573357A1|1993-12-08|Diagnostic procedure for an on-going process
FR3036515A1|2016-11-25|METHOD AND SYSTEM FOR PREDICTING THE REALIZATION OF A PREDETERMINED STATE OF AN OBJECT.
CA2888716C|2020-10-06|System for monitoring a set of components of a device
FR3082963A1|2019-12-27|SYSTEM AND METHOD FOR EVALUATING AND DEPLOYING NON-SUPERVISED OR SEMI-SUPERVISED AUTOMATIC LEARNING MODELS
FR3028331A1|2016-05-13|METHOD FOR MONITORING AN AIRCRAFT ENGINE IN OPERATION IN A GIVEN ENVIRONMENT
FR3052273A1|2017-12-08|PREDICTION OF TROUBLES IN AN AIRCRAFT
FR2816078A1|2002-05-03|Machine or system monitoring using cumulative and empirical distribution norms, uses data comparison with stochastic processing model to provide quantitative and qualitative data about the system
EP3097455A1|2016-11-30|Method for predicting an operational malfunction in the equipment of an aircraft or aircraft fleet
WO2016012972A1|2016-01-28|Method for detecting anomalies in a distribution network, in particular for drinking water
FR2957170A1|2011-09-09|Equipment monitoring system designing tool for engine of aircraft, involves processing unit evaluating quantification of monitoring system based on quality value of result corresponding to output quality value associated with output module
EP3499431A1|2019-06-19|Electronic device for processing signals with built-in optimisation of electric energy consumption and corresponding method
FR3035232A1|2016-10-21|SYSTEM FOR MONITORING THE HEALTH CONDITION OF AN ENGINE AND ASSOCIATED CONFIGURATION METHOD
EP3511781B1|2020-08-26|Device and method of collecting, docketing, analysing as well as providing the results of the analysis of data relating to watch pieces
EP3846087A1|2021-07-07|Method and system for selecting a learning model within a plurality of learning models
FR3062733A1|2018-08-10|METHOD FOR MONITORING EQUIPMENT OF ELECTROMECHANICAL ACTUATOR TYPE
EP3846047A1|2021-07-07|Method and system for identifying relevant variables
FR3046265A1|2017-06-30|SYSTEM FOR MONITORING AN INDUSTRIAL INSTALLATION; ASSOCIATED CONFIGURATION AND MONITORING METHODS
EP3846091A1|2021-07-07|Method and system for design of a predictive model
EP3846046A1|2021-07-07|Method and system for processing data for the preparation of a data set
FR3102905A1|2021-05-07|Method for detecting blocking of metering devices on a distribution network
FR3095879A1|2020-11-13|DESIGN PROCESS OF A SIGNATURE GENERATOR OF AN ENTITY PERFORMING ACTIONS IN A COMPUTER ARCHITECTURE, ABNORMAL BEHAVIOR DETECTION PROCESS, COMPUTER PROGRAM AND CORRESPONDING SYSTEM
FR3098967A1|2021-01-22|Method and device for determining an estimated time before a technical incident in an IT infrastructure from the values of performance indicators
Patent family:
Publication number | Publication date
FR3036515B1|2019-01-25|
US20180129947A1|2018-05-10|
EP3298549A1|2018-03-28|
WO2016184912A1|2016-11-24|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
WO2014075108A2|2012-11-09|2014-05-15|The Trustees Of Columbia University In The City Of New York|Forecasting system using machine learning and ensemble methods|
FR3076267B1|2018-01-04|2020-01-17|Safran Electronics & Defense|METHOD FOR DIAGNOSING A CONDITION OF WEAR OF AN AIRCRAFT PARKING BRAKE|
US10921777B2|2018-02-15|2021-02-16|Online Development, Inc.|Automated machine analysis|
US20210012194A1|2019-07-11|2021-01-14|Samsung Electronics Co., Ltd.|Method and system for implementing a variable accuracy neural network|
Legal status:
2016-05-26| PLFP| Fee payment|Year of fee payment: 2 |
2016-11-25| PLSC| Search report ready|Effective date: 20161125 |
2017-05-31| PLFP| Fee payment|Year of fee payment: 3 |
2018-05-28| PLFP| Fee payment|Year of fee payment: 4 |
2020-02-14| ST| Notification of lapse|Effective date: 20200108 |
Priority:
Application number | Filing date | Patent title
FR1554441A|FR3036515B1|2015-05-19|2015-05-19|METHOD AND SYSTEM FOR PREDICTING THE REALIZATION OF A PREDETERMINED STATE OF AN OBJECT.|
FR1554441|2015-05-19|FR1554441A| FR3036515B1|2015-05-19|2015-05-19|METHOD AND SYSTEM FOR PREDICTING THE REALIZATION OF A PREDETERMINED STATE OF AN OBJECT.|
EP16729196.2A| EP3298549A1|2015-05-19|2016-05-18|Method and system for predicting the realization of a predetermined state of an object|
PCT/EP2016/061138| WO2016184912A1|2015-05-19|2016-05-18|Method and system for predicting the realization of a predetermined state of an object|
US15/574,255| US20180129947A1|2015-05-19|2016-05-18|Method and system for predicting the realization of a predetermined state of an object|